289 research outputs found

    Towards 3D Motion Estimation from Deformable Surfaces

    Estimating the pose of an imaging sensor is a central research problem. Many solutions have been proposed for the case of a rigid environment. In contrast, we tackle the case of a non-rigid environment observed by a 3D sensor, which has been neglected in the literature. We represent the environment as sets of time-varying 3D points explained by a low-rank shape model, which we derive in its implicit and explicit forms. The parameters of this model are learnt from data gathered by the 3D sensor. We propose a learning algorithm based on minimal 3D non-rigid tensors, which we introduce. This is followed by a Maximum Likelihood nonlinear refinement performed in a bundle adjustment manner. Given the learnt environment model, we compute the pose of the 3D sensor, as well as the deformations of the environment, that is, the non-rigid counterpart of pose, from new sets of 3D points. We validate our environment learning and pose estimation modules on simulated and real data.
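
    The paper's learning step is built on minimal 3D non-rigid tensors followed by a Maximum Likelihood bundle adjustment; as a rough illustration of the underlying low-rank model only, the sketch below fits a rank-K shape basis to registered point sets with a plain SVD. All names and the toy data are ours, not the paper's.

```python
import numpy as np

def learn_low_rank_basis(point_sets, rank):
    """Fit a rank-K shape basis to T registered sets of N 3D points.

    point_sets: array of shape (T, N, 3); assumes the sets are already
    expressed in a common frame (the paper removes sensor pose first).
    Returns the mean shape (N, 3), the basis (rank, N, 3) and the
    per-frame deformation coefficients (T, rank).
    """
    T, N, _ = point_sets.shape
    flat = point_sets.reshape(T, N * 3)          # one row per time step
    mean = flat.mean(axis=0)
    U, s, Vt = np.linalg.svd(flat - mean, full_matrices=False)
    basis = Vt[:rank].reshape(rank, N, 3)        # principal deformation modes
    coeffs = U[:, :rank] * s[:rank]              # per-frame configuration weights
    return mean.reshape(N, 3), basis, coeffs

# Toy usage: 50 frames of 30 points deforming along 2 hidden modes.
rng = np.random.default_rng(0)
base = rng.normal(size=(30, 3))
modes = rng.normal(size=(2, 30, 3))
w = rng.normal(size=(50, 2))
frames = base + np.einsum('tk,kni->tni', w, modes)
mean, B, C = learn_low_rank_basis(frames, rank=2)
```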

    The Geometry of Dynamic Scenes - On Coplanar and Convergent Linear Motions Embedded in 3D Static Scenes

    In this paper, we consider structure and motion recovery for scenes consisting of static and dynamic features. More specifically, we consider a single moving uncalibrated camera observing a scene consisting of points moving along straight lines that converge to a unique point and lie on a motion plane. This scenario may describe a roadway observed by a moving camera whose motion is unknown. We show that there exist matching tensors similar to fundamental matrices. We derive the link between dynamic and static structure and motion, and show how the equation of the motion plane (or, equivalently, the plane homographies it induces between images) may be recovered from dynamic features only. Experimental results on real images are provided, in particular on a 60-frame video sequence.
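
    To make the scene model concrete, one possible formalisation is given below. The notation is ours, not necessarily the paper's: every dynamic point travels along a line through a common convergence point, and all lines lie in a single motion plane.

```latex
% Hypothetical notation for the dynamic-scene model described above.
% V: common convergence point; D_i: unit direction of line i, lying in
% the motion plane with homogeneous coordinates \pi; \mu_i(t): signed
% position of point i along its line at time t; P_t: unknown camera at
% frame t.
\[
  \mathbf{X}_i(t) = \mathbf{V} + \mu_i(t)\,\mathbf{D}_i,
  \qquad
  \boldsymbol{\pi}^\top \begin{pmatrix} \mathbf{X}_i(t) \\ 1 \end{pmatrix} = 0,
  \qquad
  \mathbf{x}_{i,t} \simeq \mathsf{P}_t \begin{pmatrix} \mathbf{X}_i(t) \\ 1 \end{pmatrix}.
\]
% The matching tensors mentioned in the abstract constrain the image
% points x_{i,t} across frames, analogously to how a fundamental matrix
% constrains images of static points.
```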

    Estimating the Pose of a 3D Sensor in a Non-Rigid Environment

    Estimating the pose of an imaging sensor is a central research problem. Many solutions have been proposed for the case of a rigid environment. In contrast, we tackle the case of a non-rigid environment observed by a 3D sensor, which has been neglected in the literature. We represent the environment as sets of time-varying 3D points explained by a low-rank shape model, which we derive in its implicit and explicit forms. The parameters of this model are learnt from data gathered by the 3D sensor. We propose a learning algorithm based on minimal 3D non-rigid tensors, which we introduce. This is followed by a Maximum Likelihood nonlinear refinement performed in a bundle adjustment manner. Given the learnt environment model, we compute the pose of the 3D sensor, as well as the deformations of the environment, that is, the non-rigid counterpart of pose, from new sets of 3D points. We validate our environment learning and pose estimation modules on simulated and real data.
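
    Given a learnt basis such as the one sketched earlier, one simple (and here entirely hypothetical) way to recover the sensor pose together with the deformation coefficients from a new point set is to alternate an orthogonal Procrustes step for the rigid pose with a linear least-squares step for the coefficients; the paper instead performs a Maximum Likelihood refinement.

```python
import numpy as np

def pose_and_deformation(points, mean, basis, n_iters=20):
    """Alternate rigid pose (Procrustes) and deformation coefficients.

    points: (N, 3) new measurements from the 3D sensor.
    mean, basis: learnt model, shapes (N, 3) and (K, N, 3).
    Returns R (3x3), t (3,), coeffs (K,). Model: points ~ R(mean +
    sum_k c_k basis_k) + t. A sketch only, not the paper's method.
    """
    K = basis.shape[0]
    coeffs = np.zeros(K)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        shape = mean + np.einsum('k,kni->ni', coeffs, basis)
        # Rigid step: orthogonal Procrustes between model shape and data.
        pc, sc = points - points.mean(0), shape - shape.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ sc)
        D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt                   # proper rotation (det = +1)
        t = points.mean(0) - R @ shape.mean(0)
        # Deformation step: linear least squares in the coefficients.
        A = np.stack([(R @ b.T).T.ravel() for b in basis], axis=1)  # (3N, K)
        r = (points - (R @ mean.T).T - t).ravel()
        coeffs = np.linalg.lstsq(A, r, rcond=None)[0]
    return R, t, coeffs
```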

    Groupwise Geometric and Photometric Direct Image Registration

    Image registration consists of estimating geometric and photometric transformations that align two images as well as possible. The direct approach minimizes the discrepancy in the intensity or color of the pixels. The inverse compositional algorithm has recently been proposed for the direct estimation of groupwise geometric transformations. It is efficient in that it performs several computationally expensive calculations in a pre-computation phase. We propose the dual inverse compositional algorithm, which deals with groupwise geometric and photometric transformations, the latter acting on the value of the pixels. Our algorithm preserves the efficient pre-computation-based design of the original inverse compositional algorithm, whereas previous attempts at incorporating photometric transformations into the inverse compositional algorithm spoil this property. We demonstrate our algorithm on simulated and real data and show the improvement in computational efficiency compared to previous algorithms.
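
    For intuition about the precomputation trick the abstract refers to, here is a deliberately minimal toy: a 2-parameter translation warp with a gain-and-bias photometric model, where the Jacobian and Hessian depend only on the template and are therefore computed once. This is not the paper's groupwise dual inverse compositional algorithm, only a sketch of the idea; all names are ours.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dual_ic_translation_gain_bias(T, I, n_iters=50, lr=1.0):
    """Toy inverse-compositional-style registration.

    Solves I(x + p) ~ a*T(x) + b for translation p and photometric
    gain/bias (a, b), with the Gauss-Newton system precomputed from the
    template T only."""
    gy, gx = np.gradient(T.astype(float))
    ones = np.ones(T.size)
    # Precomputed Jacobian columns: dx, dy, gain, bias.
    J = np.stack([gx.ravel(), gy.ravel(), T.ravel(), ones], axis=1)
    H_inv = np.linalg.inv(J.T @ J)       # computed once, reused each iteration
    p = np.zeros(2); a, b = 1.0, 0.0
    ys, xs = np.mgrid[0:T.shape[0], 0:T.shape[1]]
    for _ in range(n_iters):
        warped = map_coordinates(I, [ys + p[1], xs + p[0]], order=1)
        r = warped.ravel() - (a * T.ravel() + b)
        d = lr * (H_inv @ (J.T @ r))
        p -= d[:2]            # compose with the inverted increment
        a += d[2]; b += d[3]  # photometric update (the "dual" part here)
    return p, a, b
```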

    Simultaneous Image Registration and Monocular Volumetric Reconstruction of a fluid flow

    We propose to combine image registration and volumetric reconstruction from a monocular video of a draining Hele-Shaw cell filled with water. A Hele-Shaw cell is a tank whose depth is small (e.g. 1 mm) compared to its other dimensions (e.g. 400 × 800 mm²). We use a technique known as molecular tagging, which consists of photobleaching a pattern into the fluid and then tracking its deformations. The evolution of the pattern is filmed with a camera whose principal axis coincides with the depth of the cell. The velocity of the fluid along this direction is not constant. Consequently, tracking the pattern cannot be achieved with classical methods, because what is observed is the integration of the marked particles over the entire depth of the cell. The proposed approach is built on top of classical direct image registration, into which we incorporate a volumetric image formation model. It allows us to accurately measure the motion and the velocity profiles for the entire volume (including along the depth of the cell), which is usually hard to achieve. The results we obtain are consistent with the theoretical hydrodynamic behaviour of this flow, known as laminar Poiseuille flow.
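
    To see why classical tracking fails here, consider the laminar Poiseuille profile the results are validated against: the velocity is parabolic in depth, so the camera records a depth-integrated superposition of differently advected copies of the bleached pattern. The toy below (all names and values illustrative, not the paper's) reproduces this effect.

```python
import numpy as np

h = 1.0                                   # cell depth (mm)
z = np.linspace(-h / 2, h / 2, 101)       # through-depth coordinate
u_max = 2.0                               # centreline velocity (mm/s)
u = u_max * (1.0 - (2.0 * z / h) ** 2)    # parabolic Poiseuille profile

x = np.linspace(0.0, 10.0, 400)
pattern = np.exp(-(x - 3.0) ** 2 / 0.1)   # photobleached stripe at t = 0
t = 1.0
# Each depth layer advects at its own speed u(z); the camera sees the
# integral over depth, i.e. a smeared superposition of the stripe.
advected = np.exp(-((x[None, :] - 3.0 - u[:, None] * t) ** 2) / 0.1)
observed = advected.mean(axis=0)          # depth-averaged image row
```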

    Feature-Based Estimation of Radial Basis Mappings for Non-Rigid Registration

    We study the challenging problem of registering images of a non-rigid surface by estimating a Radial Basis Mapping from feature matches. We cast the problem as Maximum Likelihood Estimation coupled with nested model selection. We propose an algorithm based on dynamically inserting centres and refining the transformation parameters under the control of a model selection criterion. We validate the algorithm using extensive simulations and, building on recent feature extraction and matching techniques, report convincing results on real data.
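
    As a sketch of the estimation step only (centre insertion and the model selection criterion are omitted), a Radial Basis Mapping with Gaussian basis functions and an affine part can be fitted to feature matches by regularised linear least squares; the function and parameter names below are ours, not the paper's.

```python
import numpy as np

def fit_rbf_mapping(src, dst, centres, sigma=30.0, reg=1e-3):
    """Least-squares fit of a Radial Basis Mapping from feature matches.

    src, dst: (N, 2) matched points; centres: (M, 2) chosen centres.
    Returns a warp function mapping (K, 2) points to (K, 2) points.
    """
    def design(points):
        # Gaussian radial basis responses plus an affine term [x, y, 1].
        d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        phi = np.exp(-d2 / (2 * sigma ** 2))
        return np.hstack([phi, points, np.ones((len(points), 1))])

    A = design(src)                                  # (N, M + 3)
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ dst)
    return lambda q: design(q) @ W
```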

    Generalized Tensor-Product Image Warps

    The inter-image flow field is often modelled by a parametric warp function. Nearly all warps in the literature are based on a linear combination of control points, for example the Free-Form Deformation (FFD) warp, which uses the tensor-product of B-splines. It has recently been shown that the FFD warp models the affine projection of a deforming surface. This is also the case for all tensor-product warps, and it limits the extent of deformations these warps can model. We present the Generalized Tensor-Product (GTP) warps. They model the affine and perspective projection of rigid and deforming surfaces in a generic manner, and include the FFD warp and the recent NURBS warp as special cases. We also propose a new kind of warp, simpler than the perspective warp, that uses an affine function to model the surface's depth. Experimental results are reported for simulated and real data, showing how our GTP warps improve on existing ones.
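
    The tensor-product structure is easy to state in code. The sketch below evaluates a classical FFD warp, i.e. a tensor-product of B-spline bases applied to a control-point grid, using SciPy's B-spline design matrices; the perspective GTP variants described above additionally divide by a second tensor-product expression (a rational form), which is omitted here.

```python
import numpy as np
from scipy.interpolate import BSpline

def ffd_warp(q, ctrl, knots_x, knots_y, degree=3):
    """Tensor-product B-spline (FFD) warp: W(q) = sum_ab Bx_a(qx) By_b(qy) c_ab.

    q: (N, 2) image points; ctrl: (Mx, My, 2) control points, with
    Mx = len(knots_x) - degree - 1 and My defined likewise.
    """
    Bx = BSpline.design_matrix(q[:, 0], knots_x, degree).toarray()  # (N, Mx)
    By = BSpline.design_matrix(q[:, 1], knots_y, degree).toarray()  # (N, My)
    return np.einsum('na,nb,abk->nk', Bx, By, ctrl)

# Toy usage: a perturbed 4x4 control grid over the unit square.
deg = 3
kx = ky = np.array([0, 0, 0, 0, 1, 1, 1, 1], float)   # clamped cubic knots
grid = np.stack(np.meshgrid(np.linspace(0, 1, 4),
                            np.linspace(0, 1, 4), indexing='ij'), -1)
ctrl = grid + 0.05 * np.random.default_rng(1).normal(size=grid.shape)
pts = np.random.default_rng(2).uniform(0.1, 0.9, size=(5, 2))
warped = ffd_warp(pts, ctrl, kx, ky, degree=deg)
```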

    Affine Approximation for Direct Batch Recovery of Euclidean Motion From Sparse Data

    We present a batch method for recovering Euclidean camera motion from sparse image data. The main purpose of the algorithm is to recover the motion parameters using as much of the available information and as few computational steps as possible. The algorithm thus places itself in the gap between factorisation schemes, which make use of all available information in the initial recovery step, and sequential approaches, which are able to handle sparseness in the image data. Euclidean camera matrices are approximated via the affine camera model, making the recovery direct in the sense that no intermediate projective reconstruction is made. Using a little-known closure constraint, the FA-closure, we are able to formulate the camera coefficients linearly in the entries of the affine fundamental matrices. The novelty of the presented work is twofold: firstly, the presented formulation allows not only for particularly good conditioning of the estimation of the initial motion parameters but also for an unprecedented diversity in the choice of possible regularisation terms; secondly, the new autocalibration scheme presented here is in practice guaranteed to yield a Least Squares Estimate of the calibration parameters. As a by-product, the affine camera model is rehabilitated as a useful model for most cameras and scene configurations, e.g. wide-angle lenses observing a scene at close range. Experiments on real and synthetic data demonstrate the ability to reconstruct scenes which are very problematic for previous structure from motion techniques due to local ambiguities and error accumulation.
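
    The linear formulation rests on the special structure of the affine fundamental matrix: its upper-left 2×2 block is zero, so the epipolar constraint is linear in the five remaining entries and can be solved as a null-space problem. Below is a standard sketch of that building block, not the paper's FA-closure itself (point normalisation, which improves conditioning, is omitted).

```python
import numpy as np

def affine_fundamental(x1, x2):
    """Linear estimate of the affine fundamental matrix from matches.

    x1, x2: (N, 2) corresponding points in two views, N >= 4.
    With F_A = [[0, 0, a], [0, 0, b], [c, d, e]], the epipolar
    constraint x2' F_A x1 = 0 reads a*x2 + b*y2 + c*x1 + d*y1 + e = 0,
    so (a, b, c, d, e) is the null vector of an (N, 5) design matrix.
    """
    A = np.hstack([x2, x1, np.ones((len(x1), 1))])   # rows [x2 y2 x1 y1 1]
    _, _, Vt = np.linalg.svd(A)
    a, b, c, d, e = Vt[-1]                           # smallest singular vector
    return np.array([[0, 0, a],
                     [0, 0, b],
                     [c, d, e]])
```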
    • …